
1b33d16fc562464579b7199ca3114982-AuthorFeedback.pdf

Neural Information Processing Systems

Dear Reviewers, we would like to take this opportunity to thank you for your precise and constructive feedback. Citations refer to the bibliography of the paper. We will add these examples to the final version. A.: Theorem 1 does apply in finite-dimensional spaces. We will add this remark on the application to finite-dimensional spaces to the final version.


Combining physics-based and data-driven models: advancing the frontiers of research with Scientific Machine Learning

Quarteroni, Alfio, Gervasio, Paola, Regazzoni, Francesco

arXiv.org Artificial Intelligence

Scientific Machine Learning (SciML) is a recently emerged research field which combines physics-based and data-driven models for the numerical approximation of differential problems. Physics-based models rely on the physical understanding of the problem at hand, subsequent mathematical formulation, and numerical approximation. Data-driven models instead aim to extract relations between input and output data without invoking any causality principle underlying the available data distribution. In recent years, data-driven models have been rapidly developed and popularized. This diffusion has been triggered by the huge availability of data (the so-called big data), increasingly cheap computing power, and the development of powerful machine learning algorithms. SciML leverages the physical awareness of physics-based models and, at the same time, the efficiency of data-driven algorithms. With SciML, we can inject physics and mathematical knowledge into machine learning algorithms. At the same time, we can rely on data-driven algorithms' capability to discover complex and non-linear patterns from data and improve the descriptive capacity of physics-based models. After recalling the mathematical foundations of digital modelling and machine learning algorithms, and presenting the most popular machine learning architectures, we discuss the great potential of a broad variety of SciML strategies in solving complex problems governed by partial differential equations. Finally, we illustrate the successful application of SciML to the simulation of the human cardiac function, a field of significant socio-economic importance that poses numerous challenges on both the mathematical and computational fronts. The corresponding mathematical model is a complex system of non-linear ordinary and partial differential equations describing the electromechanics, valve dynamics, blood circulation, perfusion in the coronary tree, and torso potential.
Despite the robustness and accuracy of physics-based models, certain aspects, such as unveiling constitutive laws for cardiac cells and myocardial material properties, as well as devising efficient reduced order models to tame the extraordinary computational complexity, have been successfully tackled by leveraging data-driven models.
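One common SciML strategy of the kind the abstract describes is to augment an ordinary data-misfit loss with the squared residual of the governing equation evaluated at collocation points, so the physics constrains the model even where data are scarce. The sketch below illustrates this idea for the decay equation du/dt = -k·u; the candidate model family, collocation grid, and grid search are illustrative choices, not the authors' formulation:

```python
import math

def physics_informed_loss(a, b, k, t_data, u_data, t_coll, lam=1.0):
    """Schematic SciML objective for du/dt = -k*u: data misfit plus the
    squared ODE residual of the candidate model u(t) = a*exp(b*t)."""
    u  = lambda t: a * math.exp(b * t)
    du = lambda t: a * b * math.exp(b * t)          # exact derivative of the candidate
    data = sum((u(t) - y) ** 2 for t, y in zip(t_data, u_data)) / len(t_data)
    phys = sum((du(t) + k * u(t)) ** 2 for t in t_coll) / len(t_coll)
    return data + lam * phys

# Data from the true solution u(t) = exp(-0.5 t); the physics term pushes
# the rate parameter b toward -k even between the sparse data points.
k = 0.5
t_data = [0.0, 0.4, 0.8]
u_data = [math.exp(-k * t) for t in t_data]
t_coll = [0.2 * i for i in range(11)]               # collocation grid on [0, 2]
best = min(
    ((a / 10.0, b / 10.0) for a in range(5, 16) for b in range(-15, 1)),
    key=lambda ab: physics_informed_loss(ab[0], ab[1], k, t_data, u_data, t_coll),
)
```

The combined loss vanishes only at (a, b) = (1.0, -0.5), i.e. where the candidate both fits the data and satisfies the differential equation.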


On the complexity of PAC learning in Hilbert spaces

Chubanov, Sergei

arXiv.org Artificial Intelligence

In general, the classification problem we are dealing with can be formulated as follows: find a binary classifier that consistently classifies the training data such that the number of elementary operations involved in the description of the classifier is as small as possible. The intuition behind restricting the complexity of the classifier is based on the well-known Occam's razor principle; it has been proved (see e.g. Blumer et al. [1987, 1989]) that there is a relationship between the complexity of a classifier and its prediction accuracy, in the sense of probably approximately correct (PAC) classification. We study this problem from the point of view of polyhedral classification, where the concept class to be learned consists of polyhedra in a given inner-product space. Polyhedral separability is always realizable by choosing a suitable kernel; i.e., our results are universally applicable to the general case of binary classification.
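The polyhedral concept class can be made concrete in a few lines: a polyhedron is an intersection of halfspaces a_i · x ≤ b_i, and the classifier labels a point positive exactly when it satisfies every inequality. A minimal sketch, with the unit square as an illustrative polyhedron (the halfspaces are not taken from the paper):

```python
def in_polyhedron(x, halfspaces):
    """Label x positive iff a . x <= b holds for every halfspace (a, b)."""
    return all(
        sum(ai * xi for ai, xi in zip(a, x)) <= b
        for a, b in halfspaces
    )

# The unit square [0, 1]^2 as the intersection of four halfspaces:
square = [
    (( 1.0,  0.0), 1.0),   #  x <= 1
    ((-1.0,  0.0), 0.0),   # -x <= 0, i.e. x >= 0
    (( 0.0,  1.0), 1.0),   #  y <= 1
    (( 0.0, -1.0), 0.0),   #  y >= 0
]
```

The description length of such a classifier grows with the number of halfspaces, which is the kind of complexity measure the Occam's-razor argument bounds.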


Latest Neural Nets Solve World's Hardest Equations Faster Than Ever Before

#artificialintelligence

In high school physics, we learn about Newton's second law of motion -- force equals mass times acceleration -- through simple examples of a single force (say, gravity) acting on an object of some mass. In an idealized scenario where the only independent variable is time, the second law is effectively an "ordinary differential equation," which one can solve to calculate the position or velocity of the object at any moment in time. But in more involved situations, multiple forces act on the many moving parts of an intricate system over time. To model a passenger jet scything through the air, a seismic wave rippling through Earth or the spread of a disease through a population -- to say nothing of the interactions of fundamental forces and particles -- engineers, scientists and mathematicians resort to "partial differential equations" (PDEs) that can describe complex phenomena involving many independent variables. The problem is that partial differential equations -- as essential and ubiquitous as they are in science and engineering -- are notoriously difficult to solve, if they can be solved at all.
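The single-force case the article mentions can be written down directly: with gravity as the only force, Newton's second law reduces to the pair of ODEs dy/dt = v and dv/dt = -g, which even a simple forward-Euler loop solves to good accuracy (a minimal sketch; the initial height, step size, and time horizon are illustrative choices):

```python
def simulate_free_fall(y0=100.0, v0=0.0, g=9.81, dt=0.001, t_end=1.0):
    """Forward-Euler integration of dy/dt = v, dv/dt = -g (free fall, SI units)."""
    y, v = y0, v0
    for _ in range(int(round(t_end / dt))):
        y += v * dt      # position update from the current velocity
        v += -g * dt     # velocity update from Newton's second law
    return y, v

y, v = simulate_free_fall()
# Analytic solution after 1 s: v = -9.81 m/s, y = 100 - 9.81/2 = 95.095 m;
# with dt = 0.001 the Euler result lands within a few millimetres of it.
```

Partial differential equations resist this kind of step-by-step treatment precisely because the unknowns vary over several independent variables at once, which is what the neural solvers described here aim to handle.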